What is fastparallel?
The fastparallel npm package runs multiple callback-style asynchronous functions in parallel with minimal overhead. It lets you start several tasks at once and collect their results in a single callback once every task has completed, which is particularly useful when you need to perform multiple I/O-bound operations concurrently.
What are fastparallel's main functionalities?
Parallel Execution
This feature allows you to execute multiple asynchronous tasks in parallel. The provided code sample demonstrates how to run two tasks concurrently and collect their results once both tasks are completed.
const fastparallel = require('fastparallel')();

function task1(arg, cb) {
  setTimeout(() => cb(null, 'result1'), 100);
}

function task2(arg, cb) {
  setTimeout(() => cb(null, 'result2'), 200);
}

fastparallel(null, [task1, task2], 'argument', (err, results) => {
  if (err) throw err;
  console.log(results); // ['result1', 'result2'] (ordering is not guaranteed; see Caveats below)
});
Error Handling
This feature demonstrates how fastparallel handles errors in parallel tasks. If any task encounters an error, the final callback receives the error, and the results array is not populated.
const fastparallel = require('fastparallel')();

function task1(arg, cb) {
  setTimeout(() => cb(null, 'result1'), 100);
}

function task2(arg, cb) {
  setTimeout(() => cb(new Error('Something went wrong'), null), 200);
}

fastparallel(null, [task1, task2], 'argument', (err, results) => {
  if (err) {
    console.error(err.message); // 'Something went wrong'
  } else {
    console.log(results);
  }
});
Other packages similar to fastparallel
async
The async package provides a wide range of utilities for working with asynchronous JavaScript. It includes methods for parallel execution, series execution, and more. Compared to fastparallel, async offers a broader set of functionalities but may be less performant for simple parallel execution tasks.
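For comparison, a minimal sketch of the same two-task scenario using async.parallel (assumes the async package is installed; the task bodies are illustrative):

const async = require('async');

async.parallel([
  (cb) => setTimeout(() => cb(null, 'result1'), 100),
  (cb) => setTimeout(() => cb(null, 'result2'), 200)
], (err, results) => {
  if (err) throw err;
  console.log(results); // ['result1', 'result2']
});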
bluebird
Bluebird is a fully-featured Promise library that includes methods for managing multiple promises in parallel. It offers more advanced features like promise cancellation and long stack traces. While it is more feature-rich, it may be overkill for simple parallel execution scenarios that fastparallel handles efficiently.
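A minimal sketch of the same scenario with Bluebird, using Promise.all and Promise.delay (illustrative values only):

const Promise = require('bluebird');

Promise.all([
  Promise.delay(100).then(() => 'result1'),
  Promise.delay(200).then(() => 'result2')
]).then((results) => {
  console.log(results); // ['result1', 'result2']
});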
p-map
p-map is a smaller utility that maps over promises concurrently. It allows you to control the concurrency level, making it more flexible for certain use cases. However, it is less specialized than fastparallel for simple parallel function execution.
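A minimal sketch with p-map and an explicit concurrency limit (recent p-map versions are ESM-only, so this assumes an ES module context; the mapper and concurrency value are illustrative):

import pMap from 'p-map';

const results = await pMap(
  [1, 2, 3],
  async (n) => n * 2, // illustrative async mapper
  { concurrency: 2 }
);
console.log(results); // [2, 4, 6]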
fastparallel
Zero-overhead parallel function call for node.js. Also supports each and map!
Benchmark for doing 3 calls of setImmediate 1 million times:

- non-reusable setImmediate: 2172ms
- async.parallel: 5739ms
- async.each: 3015ms
- async.map: 4981ms
- parallelize: 3125ms
- fastparallel with results: 2391ms
- fastparallel without results: 2350ms
- fastparallel map: 2351ms
- fastparallel each: 2359ms
These benchmarks were taken via bench.js on iojs 2.2.1, on a MacBook Pro Retina 2014.
If you need a zero-overhead series function call, check out fastseries. If you need a fast work queue, check out fastq. If you need to run fast waterfall calls, use fastfall.
Example for parallel call
var parallel = require('fastparallel')({
  // called when a parallel completes
  released: completed,
  // collect the results and pass them to done
  results: true
})

parallel(
  {}, // what will be this in the functions
  [something, something, something], // functions to call
  42, // the first argument passed to each function
  done // called when all functions have completed
)
function something (arg, cb) {
setImmediate(cb, null, 'myresult')
}
function done (err, results) {
console.log('parallel completed, results:', results)
}
function completed () {
console.log('parallel completed!')
}
Example for each and map calls
var parallel = require('fastparallel')({
released: completed,
results: true
})
parallel(
  {}, // what will be this in the function
  something, // the function to call for each element
  [1, 2, 3], // the elements to iterate over
  done // called when all calls have completed
)
function something (arg, cb) {
setImmediate(cb, null, 'myresult')
}
function done (err, results) {
console.log('parallel completed, results:', results)
}
function completed () {
console.log('parallel completed!')
}
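If you do not need the collected results (the "without results" and "each" cases in the benchmark above), the factory can be created with results: false. A minimal sketch under that assumption, reusing the something function from the example above:

var each = require('fastparallel')({
  results: false // skip collecting results
})

each(
  {}, // what will be this in the function
  something, // reused from the example above
  [1, 2, 3],
  function (err) {
    console.log('each completed', err ? 'with an error' : 'without errors')
  }
)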
Caveats
The results array will be non-ordered, and the done function will be called only once, even if more than one error happens.

This library works by caching the latest used function, so that running a new parallel does not cause any memory allocations.
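A small sketch of the ordering caveat (task names are illustrative): the result of the faster task may appear before the result of the slower one, regardless of their position in the array.

var parallel = require('fastparallel')({ results: true })

function slow (arg, cb) {
  setTimeout(cb, 200, null, 'slow')
}

function fast (arg, cb) {
  setTimeout(cb, 50, null, 'fast')
}

parallel({}, [slow, fast], 42, function (err, results) {
  // non-ordered: 'fast' may come before 'slow' even though slow was listed first
  console.log(results)
})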
Why is it so fast?

- This library caches functions a lot.
- V8 optimizations: thanks to caching, the functions can be optimized by V8 (if they are optimizable, and I took great care of making them so).
- Don't use arrays if you just need a queue. A linked list implemented via processes is much faster if you don't need to access elements in between.
- Accept passing a this for the functions. Thanks to this hack, you can extract your functions and place them in an outer level where they are not created at every execution (see the sketch below).
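A small sketch of that last point (function names are illustrative): the task functions live at the top level and read per-call state from this, the object passed as the first argument to parallel, instead of being recreated as closures on every call.

var parallel = require('fastparallel')({ results: true })

// defined once at module level, never recreated per call
function buildGreeting (arg, cb) {
  // this is the object passed as the first argument to parallel()
  setImmediate(cb, null, this.prefix + ' ' + arg)
}

function buildFarewell (arg, cb) {
  setImmediate(cb, null, this.suffix + ' ' + arg)
}

parallel({ prefix: 'hello', suffix: 'bye' }, [buildGreeting, buildFarewell], 'world', function (err, results) {
  console.log(results) // e.g. ['hello world', 'bye world'] (ordering not guaranteed)
})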
License
ISC